A Robust Countermeasure for Poisoning Attacks on Deep Neural Networks of Computer Interaction Systems
Authors
Abstract
In recent years, human–computer interaction systems have begun to apply deep neural networks (DNNs), known as deep learning, to make them more user-friendly. Nowadays, adversarial example attacks, poisoning attacks, and backdoor attacks are the typical attacks against DNNs. In this paper, we focus on analyzing these three attacks. We develop a countermeasure called Data Washing, an algorithm based on a denoising autoencoder, which can effectively alleviate the damage that poisoning attacks inflict on datasets. Furthermore, we also propose the Integrated Detection Algorithm (IDA) to detect various types of attacks. In our experiments, for Paralysis Attacks, Data Washing yields a significant accuracy improvement (an increment of 0.5384) and helps IDA detect those attacks, while for Target Attacks the false positive rate is reduced to just 1% with a detection rate greater than 99%.
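The abstract gives no implementation details of Data Washing beyond "an algorithm based on a denoising autoencoder." As an illustration of that general idea only, the sketch below trains a tiny denoising autoencoder and then passes a corrupted dataset through it to "wash" it; the layer sizes, noise level, learning rate, and training loop are all assumptions for the example, not the paper's actual algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class DenoisingAutoencoder:
    """Minimal two-layer denoising autoencoder (illustrative only)."""

    def __init__(self, n_in, n_hidden, lr=0.5):
        self.W1 = rng.normal(0, 0.1, (n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0, 0.1, (n_hidden, n_in))
        self.b2 = np.zeros(n_in)
        self.lr = lr

    def forward(self, x):
        h = sigmoid(x @ self.W1 + self.b1)
        out = sigmoid(h @ self.W2 + self.b2)
        return h, out

    def train_step(self, clean, noise_std=0.3):
        # Corrupt the input, but compute the loss against the *clean*
        # target -- this is what makes the autoencoder "denoising".
        noisy = clean + rng.normal(0, noise_std, clean.shape)
        h, out = self.forward(noisy)
        err = out - clean
        # Backpropagate the mean squared error by hand.
        d_out = err * out * (1 - out)
        d_h = (d_out @ self.W2.T) * h * (1 - h)
        self.W2 -= self.lr * h.T @ d_out / len(clean)
        self.b2 -= self.lr * d_out.mean(axis=0)
        self.W1 -= self.lr * noisy.T @ d_h / len(clean)
        self.b1 -= self.lr * d_h.mean(axis=0)
        return float((err ** 2).mean())

    def wash(self, x):
        # "Washing": reconstruct each (possibly tampered) sample through
        # the trained autoencoder before it reaches the downstream model.
        return self.forward(x)[1]

# Toy data: binary feature vectors standing in for a training set.
clean = rng.integers(0, 2, (256, 8)).astype(float)
dae = DenoisingAutoencoder(n_in=8, n_hidden=16)
losses = [dae.train_step(clean) for _ in range(2000)]
washed = dae.wash(clean + rng.normal(0, 0.3, clean.shape))
```

The design point the example illustrates is that the autoencoder is trained to map corrupted inputs back to clean ones, so small adversarial perturbations injected by a poisoning attack tend to be reconstructed away when the dataset is passed through `wash` before model training.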
Similar resources
Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning
Deep learning models have achieved high performance on many tasks, and thus have been applied to many security-critical scenarios. For example, deep learning-based face recognition systems have been used to authenticate users to access many security-sensitive applications like payment apps. Such usages of deep learning systems provide the adversaries with sufficient incentives to perform attack...
Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks
Data poisoning is a type of adversarial attack on machine learning models wherein the attacker adds examples to the training set to manipulate the behavior of the model at test time. This paper explores a broad class of poisoning attacks on neural nets. The proposed attacks use “clean-labels”; they don’t require the attacker to have any control over the labeling of training data. They are also ...
A survey on RPL attacks and their countermeasures
RPL (Routing Protocol for Low Power and Lossy Networks) has been designed for low power networks with high packet loss. Generally, devices with low processing power and limited memory are used in this type of network. IoT (Internet of Things) is a typical example of low power lossy networks. In this technology, objects are interconnected through a network consisted of low-power circuits. Exampl...
Manipulating Machine Learning: Poisoning Attacks and Countermeasures for Regression Learning
As machine learning becomes widely used for automated decisions, attackers have strong incentives to manipulate the results and models generated by machine learning algorithms. In this paper, we perform the first systematic study of poisoning attacks and their countermeasures for linear regression models. In poisoning attacks, attackers deliberately influence the training data to manipulate the...
Auror: defending against poisoning attacks in collaborative deep learning systems
Deep learning in a collaborative setting is emerging as a cornerstone of many upcoming applications, wherein untrusted users collaborate to generate more accurate models. From the security perspective, this opens collaborative deep learning to poisoning attacks, wherein adversarial users deliberately alter their inputs to mis-train the model. These attacks are known for machine learning systems...
Journal
Journal title: Applied Sciences
Year: 2022
ISSN: 2076-3417
DOI: https://doi.org/10.3390/app12157753